[pull] master from comfyanonymous:master #106

Open
wants to merge 1,634 commits into master from comfyanonymous:master
Conversation

pull[bot] commented Sep 18, 2023

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.1)

Can you help keep this open source service alive? 💖 Please sponsor : )

pull[bot] added the ⤵️ pull and merge-conflict (Resolve conflicts manually) labels on Sep 18, 2023
comfyanonymous and others added 28 commits November 15, 2024 12:47
* Update web content to release v1.3.44

* nit
This one should work for skipping the single layers of models like Flux
and Auraflow.

If you want to see how these models work and how many double/single layers
they have, see the "ModelMerge*" nodes for the specific model.
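The commit itself isn't shown here, but the mechanic reduces to bypassing chosen single-block indices during the forward pass. A minimal sketch with hypothetical class and attribute names, not ComfyUI's actual node code:

```python
import torch
import torch.nn as nn

# Hypothetical wrapper around a stack of "single" transformer blocks,
# loosely modeled on Flux/AuraFlow-style architectures. Names here are
# illustrative, not ComfyUI's actual implementation.
class SingleBlockSkipper(nn.Module):
    def __init__(self, blocks: nn.ModuleList, skip_indices: set):
        super().__init__()
        self.blocks = blocks
        self.skip_indices = skip_indices  # single-layer indices to bypass

    def forward(self, x):
        for i, block in enumerate(self.blocks):
            if i in self.skip_indices:
                continue  # skipped layer: activations pass through unchanged
            x = block(x)
        return x

# Usage: skip single layers 3 and 7 of a toy 10-layer stack.
blocks = nn.ModuleList([nn.Linear(16, 16) for _ in range(10)])
model = SingleBlockSkipper(blocks, {3, 7})
out = model(torch.randn(1, 16))
```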
* Update UI screenshot in README

* Remove legacy UI screenshot file

* nit

* nit
Add a way to reshape lora weights.

Allow weight patches to apply to all weights, not just .weight and .bias.

Add a way for a lora to set a weight to a specific value.
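Taken together, these commits generalize the patching path. A minimal sketch of the three behaviors under an assumed, hypothetical patch-dict format (not ComfyUI's actual model_patcher schema):

```python
import torch

def apply_weight_patch(weights: dict, key: str, patch: dict) -> None:
    """Apply a patch to weights[key]; key may be any parameter name,
    not only ones ending in .weight or .bias. The 'reshape'/'set'/'delta'
    fields are a hypothetical format used for illustration."""
    w = weights[key]
    if "reshape" in patch:
        w = w.reshape(patch["reshape"])     # reshape the lora target
    if "set" in patch:
        w = patch["set"].to(w.dtype)        # overwrite with a specific value
    if "delta" in patch:
        w = w + patch["delta"].to(w.dtype)  # conventional additive update
    weights[key] = w

# Usage: flatten a 4x4 weight, then force it to a fixed tensor.
weights = {"block.attn.qkv_scale": torch.randn(4, 4)}
apply_weight_patch(weights, "block.attn.qkv_scale", {"reshape": (16,)})
apply_weight_patch(weights, "block.attn.qkv_scale", {"set": torch.zeros(16)})
```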
At length=1, the LTX model can do txt2img and img2img with no other changes required.
comfyanonymous and others added 30 commits January 16, 2025 07:23
* Use `torch.special.expm1`

This function provides greater precision than `exp(x) - 1` for small values of `x`.

Found with TorchFix https://github.com/pytorch-labs/torchfix/

* Use non-alias
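The precision gap is easy to demonstrate with a small self-contained check (the input value 1e-10 is an arbitrary choice):

```python
import torch

x = torch.tensor([1e-10], dtype=torch.float32)

# Naive form: exp(1e-10) rounds to exactly 1.0 in float32, because the
# true increment is far below float32 epsilon, so subtracting 1 gives 0.0.
naive = torch.exp(x) - 1

# expm1 computes exp(x) - 1 directly, avoiding the cancellation.
precise = torch.special.expm1(x)

print(naive.item())    # 0.0
print(precise.item())  # ~1e-10
```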

* [i18n] Add /i18n endpoint to provide all custom node translations

* Sort glob result for deterministic ordering

* Update comment
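A minimal sketch of such an endpoint, assuming an aiohttp server and a hypothetical custom_nodes/<node>/locales/<lang>/main.json layout (the real route and file layout live in ComfyUI's server code):

```python
import glob
import json
import os

from aiohttp import web

CUSTOM_NODES_DIR = "custom_nodes"  # assumed location of custom node packages

async def i18n_handler(request: web.Request) -> web.Response:
    translations = {}
    pattern = os.path.join(CUSTOM_NODES_DIR, "*", "locales", "*", "main.json")
    # glob order is filesystem-dependent; sorting the result makes the
    # merged output deterministic across runs and machines.
    for path in sorted(glob.glob(pattern)):
        lang = os.path.basename(os.path.dirname(path))
        with open(path, encoding="utf-8") as f:
            translations.setdefault(lang, {}).update(json.load(f))
    return web.json_response(translations)

app = web.Application()
app.router.add_get("/i18n", i18n_handler)
# web.run_app(app)  # serve GET /i18n
```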